Video event extraction aims to detect salient events from a video and identify the arguments for each event as well as their semantic roles. Existing methods focus on capturing the overall visual scene of each frame, ignoring fine-grained argument-level information. Inspired by the definition of events as changes of states, we propose a novel framework to detect video events by tracking the changes in the visual states of all involved arguments, which are expected to provide the most informative evidence for the extraction of video events. In order to capture the visual state changes of arguments, we decompose them into changes in pixels within objects, displacements of objects, and interactions among multiple arguments. We further propose Object State Embedding, Object Motion-aware Embedding and Argument Interaction Embedding to encode and track these changes respectively. Experiments on various video event extraction tasks demonstrate significant improvements compared to state-of-the-art models. In particular, on verb classification, we achieve 3.49% absolute gains (19.53% relative gains) in F1@5 on Video Situation Recognition.
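To make the decomposition concrete, below is a minimal sketch of how the three argument-level embeddings could be computed and fused; the module names, tensor shapes, and fusion strategy are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of argument-level state-change encoding for video event
# extraction. All module names, feature dimensions and the fusion strategy
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class ArgumentStateChangeEncoder(nn.Module):
    def __init__(self, obj_feat_dim=2048, hidden=256, num_verbs=500):
        super().__init__()
        # Object State Embedding: appearance of each argument per frame,
        # tracked over time so pixel-level changes inside the object are visible.
        self.state_proj = nn.Linear(obj_feat_dim, hidden)
        self.state_gru = nn.GRU(hidden, hidden, batch_first=True)
        # Object Motion-aware Embedding: displacement of each argument's
        # bounding-box center/size across frames.
        self.motion_gru = nn.GRU(4, hidden, batch_first=True)
        # Argument Interaction Embedding: attention among arguments in a frame.
        self.interact = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.verb_head = nn.Linear(3 * hidden, num_verbs)

    def forward(self, obj_feats, obj_boxes):
        # obj_feats: (B, T, N, D) ROI features of N arguments over T frames
        # obj_boxes: (B, T, N, 4) normalized boxes (cx, cy, w, h)
        B, T, N, D = obj_feats.shape
        states = self.state_proj(obj_feats)                      # (B, T, N, H)
        # Track per-argument appearance change over time.
        s = states.permute(0, 2, 1, 3).reshape(B * N, T, -1)
        state_emb = self.state_gru(s)[1][-1].view(B, N, -1)      # (B, N, H)
        # Track per-argument displacement over time.
        m = obj_boxes.permute(0, 2, 1, 3).reshape(B * N, T, 4)
        motion_emb = self.motion_gru(m)[1][-1].view(B, N, -1)    # (B, N, H)
        # Model interactions among arguments using the last frame's states.
        inter_emb, _ = self.interact(states[:, -1], states[:, -1], states[:, -1])
        fused = torch.cat([state_emb, motion_emb, inter_emb], dim=-1)  # (B, N, 3H)
        return self.verb_head(fused.mean(dim=1))                 # (B, num_verbs)
```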
In current anchor-based detectors, a positive anchor box is intuitively assigned to the object that overlaps it the most. The label assigned to each anchor directly determines the optimization direction of the corresponding prediction box, covering both box regression and category prediction. In our practice of crowded object detection, however, the results show that a positive anchor does not always regress toward the object that overlaps it the most when multiple objects overlap; we name this phenomenon anchor drift. Anchor drift reflects that the anchor-object matching relation, determined by the degree of overlap between anchors and objects, is not always optimal. Conflicts between the fixed matching relation and the experience learned during earlier training may cause ambiguous predictions and thus raise the false-positive rate. In this paper, a simple but efficient adaptive two-stage anchor assignment (TSAA) method is proposed. It uses the final prediction boxes, rather than the fixed anchors, to compute the degree of overlap with objects and decide which object each anchor should regress toward. The participation of the prediction boxes makes the anchor-object assignment mechanism adaptive. Extensive experiments are conducted on three classic detectors, RetinaNet, Faster-RCNN and YOLOv3, on CrowdHuman and COCO to evaluate the effectiveness of TSAA. The results show that TSAA can significantly improve the detectors' performance without additional computational costs or changes to the network structure.
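The core idea of the adaptive stage is easy to state in code: measure overlap against the anchors' predicted boxes instead of the anchors themselves. The sketch below illustrates this under an assumed threshold and a simplified matching rule; it is not the paper's exact procedure.

```python
# Minimal sketch of the idea behind a two-stage anchor assignment: after the
# first stage, overlap is measured between each anchor's *predicted* box and
# the ground-truth boxes, and the anchor is (re)assigned to the object its
# prediction actually regresses toward. The threshold and tie-breaking here
# are illustrative assumptions, not the paper's exact rules.
import torch


def pairwise_iou(boxes_a, boxes_b):
    """IoU between two sets of boxes in (x1, y1, x2, y2) format."""
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    lt = torch.max(boxes_a[:, None, :2], boxes_b[None, :, :2])
    rb = torch.min(boxes_a[:, None, 2:], boxes_b[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)


def assign_targets(anchors, pred_boxes, gt_boxes, use_predictions, pos_thr=0.5):
    """Return, per anchor, the index of its matched object (-1 = background)."""
    # Stage 1: classic static assignment by anchor-GT overlap.
    # Stage 2: adaptive assignment by prediction-GT overlap.
    ref = pred_boxes if use_predictions else anchors
    iou = pairwise_iou(ref, gt_boxes)          # (num_anchors, num_gt)
    best_iou, best_gt = iou.max(dim=1)
    matched = torch.where(best_iou >= pos_thr, best_gt, torch.full_like(best_gt, -1))
    return matched
```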
Federated learning is an emerging technique for training models from decentralized datasets. In many applications, the data owners participating in a federated learning system hold not only the data but also a set of domain knowledge. Such knowledge, which includes human know-how and craftsmanship, can be very helpful to the federated learning task. In this work, we propose a federated learning framework that allows the injection of participants' domain knowledge, where the key idea is to refine the global model with local knowledge. The scenario we consider is motivated by a real industry-level application, and we demonstrate the effectiveness of our approach on this application.
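A minimal sketch of what "refining the global model with local knowledge" could look like in a FedAvg-style loop is given below; the rule-based knowledge teacher, loss weighting, and plain parameter averaging are all illustrative assumptions rather than the proposed framework.

```python
# Minimal FedAvg-style sketch of "refine the global model with local domain
# knowledge": each participant adds a penalty that pushes the model toward
# labels produced by its own rule-based knowledge. The rule-based teacher and
# the loss weighting are illustrative assumptions, not the paper's design.
import copy
import torch
import torch.nn as nn


def local_update(global_model, data_loader, knowledge_rule, lam=0.1, lr=1e-3):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for x, y in data_loader:
        logits = model(x)
        # Standard supervised loss on local data.
        loss = ce(logits, y)
        # Knowledge refinement: penalize disagreement with the participant's
        # domain rules (here, a callable mapping inputs to pseudo-labels).
        rule_labels = knowledge_rule(x)
        loss = loss + lam * ce(logits, rule_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()


def federated_round(global_model, clients):
    # clients: list of (data_loader, knowledge_rule) pairs
    updates = [local_update(global_model, dl, rule) for dl, rule in clients]
    avg = {k: torch.stack([u[k].float() for u in updates]).mean(0) for k in updates[0]}
    global_model.load_state_dict(avg)
    return global_model
```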
Graph neural networks (GNNs) have become an effective approach to many machine learning tasks and bring a new way to build recommender systems, in which the recommendation task can be formulated as a user-item link prediction problem. Training GNN-based recommender systems (GNNRecSys) on large graphs incurs a large memory footprint that easily exceeds the DRAM capacity of a typical server. Existing solutions resort to distributed subgraph training, which is inefficient due to the high cost of dynamically constructing subgraphs and the significant redundancy across subgraphs. The emerging Intel Optane persistent memory allows a single machine to have up to 6 TB of memory at an affordable cost, thus making single-machine GNNRecSys training feasible and eliminating the inefficiencies of distributed training. A major concern with using Optane for GNNRecSys is Optane's relatively low bandwidth compared with DRAM. This limitation can be particularly detrimental to achieving high performance for GNNRecSys workloads, since their dominant compute kernels are sparse and memory-access intensive. To understand whether Optane is a good fit for GNNRecSys training, we perform an in-depth characterization of GNNRecSys workloads and a comprehensive benchmarking study. Our benchmarking results show that, when properly configured, Optane-based single-machine GNNRecSys training outperforms distributed training by a large margin, especially when handling deep GNN models. We analyze where the speedup comes from, provide guidance on how to configure Optane for GNNRecSys workloads, and discuss opportunities for further optimization.
In recent years, interest has arisen in using machine learning to improve the efficiency of automatic medical consultation and enhance the patient experience. In this article, we propose two frameworks to support automatic medical consultation, namely doctor-patient dialogue understanding and task-oriented interaction. We create a new large-scale medical dialogue dataset with multi-level fine-grained annotations and establish five independent tasks, including named entity recognition, dialogue act classification, symptom label inference, medical report generation and diagnosis-oriented dialogue policy. We report a set of benchmark results for each task, which shows the usability of the dataset and sets a baseline for future studies. Both code and data are available at https://github.com/lemuria-wchen/imcs21.
Methods for 3D lane detection have recently been proposed to address the issue of inaccurate lane layouts in many autonomous driving scenarios (uphill/downhill, bumps, etc.). Previous work struggles in complex cases due to its simple design of the spatial transformation between the front view and the bird's-eye view (BEV) and the lack of a realistic dataset. Towards these issues, we present PersFormer: an end-to-end monocular 3D lane detector with a novel Transformer-based spatial feature transformation module. Our model generates BEV features by attending to related front-view local regions with reference to camera parameters. PersFormer adopts a unified 2D/3D anchor design and an auxiliary task to detect 2D/3D lanes simultaneously, improving feature consistency and sharing the benefits of multi-task learning. Moreover, we release one of the first large-scale real-world 3D lane datasets: OpenLane, with high-quality annotations and scene diversity. OpenLane contains 200,000 frames and over 880,000 instance-level lanes across 14 lane categories, along with scene tags and closest-in-path object annotations, to encourage the development of lane detection and more industry-related autonomous driving methods. We show that PersFormer significantly outperforms competitive baselines in the 3D lane detection task on our new OpenLane dataset as well as on the Apollo 3D Lane Synthetic dataset, and is also on par with state-of-the-art algorithms in the 2D task on OpenLane. The project page is available at https://github.com/openperceptionx/persformer_3dlane, and the OpenLane dataset is available at https://github.com/openperceptionx/openlane.
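As a rough illustration of generating BEV features from front-view features with camera parameters, the sketch below projects BEV grid points onto the image plane and bilinearly samples the front-view features there; the sampling stands in for PersFormer's Transformer attention, and the flat-ground assumption, ranges, and shapes are simplifications rather than the paper's design.

```python
# Minimal sketch of building BEV features from front-view (FV) features with
# camera parameters: each BEV grid cell is projected onto the image plane and
# the FV feature there is gathered (bilinear sampling stands in for the
# Transformer attention used by PersFormer). Assumes a flat ground plane
# (z = 0) and intrinsics already scaled to the feature-map resolution.
import torch
import torch.nn.functional as F


def bev_from_frontview(fv_feat, cam_intrinsics, cam_extrinsics,
                       x_range=(-10, 10), y_range=(3, 103), bev_hw=(200, 48)):
    # fv_feat: (B, C, Hf, Wf) front-view feature map
    # cam_intrinsics: (B, 3, 3); cam_extrinsics: (B, 4, 4) world-to-camera
    B, C, Hf, Wf = fv_feat.shape
    H, W = bev_hw
    xs = torch.linspace(*x_range, W)
    ys = torch.linspace(*y_range, H)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    ones = torch.ones_like(grid_x)
    # BEV points on the flat-ground plane (x lateral, y forward, z = 0).
    pts = torch.stack([grid_x, grid_y, torch.zeros_like(grid_x), ones], -1)  # (H, W, 4)
    pts = pts.view(1, -1, 4).expand(B, -1, 4)
    # Project world -> camera -> image with the provided parameters.
    cam = torch.einsum("bij,bnj->bni", cam_extrinsics, pts)[..., :3]
    img = torch.einsum("bij,bnj->bni", cam_intrinsics, cam)
    uv = img[..., :2] / img[..., 2:3].clamp(min=1e-5)
    # Normalize pixel coords to [-1, 1] for grid_sample.
    u = uv[..., 0] / (Wf - 1) * 2 - 1
    v = uv[..., 1] / (Hf - 1) * 2 - 1
    grid = torch.stack([u, v], -1).view(B, H, W, 2)
    return F.grid_sample(fv_feat, grid, align_corners=True)  # (B, C, H, W)
```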
Current pre-trained language models (PLMs) are typically trained on static data, ignoring that in real-world scenarios, streaming data from various sources may grow continuously. This requires PLMs to integrate information from all sources in a lifelong manner. Although this goal could be achieved by exhaustive pre-training on all existing data, such a process is known to be computationally expensive. To this end, we propose ELLE, aiming at efficient lifelong pre-training on emerging data. Specifically, ELLE consists of (1) function-preserved model expansion, which flexibly expands an existing PLM's width and depth to improve the efficiency of knowledge acquisition; and (2) pre-trained domain prompts, which disentangle the versatile knowledge learned during pre-training and stimulate the proper knowledge for downstream tasks. We conduct experiments on BERT and GPT with streaming data from 5 domains. The results demonstrate the superiority of ELLE over various lifelong learning baselines in both pre-training efficiency and downstream performance. The code is publicly available at https://github.com/thunlp/elle.
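One classic way to realize function-preserving width expansion is Net2Net-style widening, sketched below for a single hidden layer; ELLE's actual expansion operators, its depth growth, and its domain prompts may differ, so this only illustrates the "grow the network without changing its function" idea.

```python
# Minimal sketch of function-preserving width expansion for one hidden layer,
# in the spirit of Net2Net-style widening. ELLE's exact expansion operators
# (and its depth expansion / domain prompts) may differ; this only illustrates
# the "grow the network without changing its function" idea.
import torch
import torch.nn as nn


@torch.no_grad()
def widen_pair(fc_in: nn.Linear, fc_out: nn.Linear, new_width: int):
    old_width = fc_in.out_features
    assert new_width > old_width
    # Choose which existing hidden units to duplicate.
    idx = torch.randint(0, old_width, (new_width - old_width,))
    mapping = torch.cat([torch.arange(old_width), idx])
    # New first layer: duplicated rows reproduce the chosen units exactly.
    new_in = nn.Linear(fc_in.in_features, new_width)
    new_in.weight.copy_(fc_in.weight[mapping])
    new_in.bias.copy_(fc_in.bias[mapping])
    # New second layer: split outgoing weights among the copies so the
    # widened network computes the same function as the original one.
    counts = torch.bincount(mapping, minlength=old_width).float()
    new_out = nn.Linear(new_width, fc_out.out_features)
    new_out.weight.copy_(fc_out.weight[:, mapping] / counts[mapping])
    new_out.bias.copy_(fc_out.bias)
    return new_in, new_out


# Quick check that the function is preserved.
f1, f2 = nn.Linear(16, 32), nn.Linear(32, 8)
g1, g2 = widen_pair(f1, f2, new_width=48)
x = torch.randn(4, 16)
print(torch.allclose(f2(torch.relu(f1(x))), g2(torch.relu(g1(x))), atol=1e-5))
```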
Sequential recommendation is a widely popular topic in recommender systems. Existing works improve the prediction ability of sequential recommendation systems with various methods, such as recurrent networks and self-attention mechanisms. However, they fail to discover and distinguish the various relations between items, which could be the underlying factors that drive user behaviors. In this paper, we propose an Edge-Enhanced Global Disentangled Graph Neural Network (EGD-GNN) model to capture the relation information between items for global item representation and local user-intention learning. At the global level, we build a global link graph over all sequences to model item relations. A channel-aware disentangling learning layer is then designed to decompose edge information into different channels, which can be aggregated to represent the target item from its neighbors. At the local level, we apply a variational auto-encoder framework to learn the user intention over the current sequence. We evaluate our proposed method on three real-world datasets. Experimental results show that our model achieves crucial improvements over state-of-the-art baselines and is able to distinguish item features.
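The channel-aware disentangled aggregation can be pictured with the minimal module below, which softly routes each edge into channels and aggregates neighbors per channel; the dimensions, gating function, and fusion step are illustrative assumptions, not the paper's layer.

```python
# Minimal sketch of channel-aware disentangled neighbor aggregation: each
# edge's information is softly routed into K channels, and neighbors are
# aggregated per channel to represent the target item. Dimensions, the gating
# function, and the fusion step are illustrative assumptions.
import torch
import torch.nn as nn


class ChannelDisentangledAggregator(nn.Module):
    def __init__(self, dim=64, num_channels=4):
        super().__init__()
        self.K = num_channels
        # Per-channel projections of neighbor item embeddings.
        self.channel_proj = nn.Linear(dim, num_channels * dim)
        # Gate that decides how strongly each edge belongs to each channel.
        self.edge_gate = nn.Linear(2 * dim, num_channels)
        self.fuse = nn.Linear(num_channels * dim, dim)

    def forward(self, target, neighbors):
        # target: (B, D) target item embedding; neighbors: (B, N, D)
        B, N, D = neighbors.shape
        # Route each edge (target, neighbor) into channels.
        edge_feat = torch.cat([target[:, None].expand(-1, N, -1), neighbors], -1)
        gate = torch.softmax(self.edge_gate(edge_feat), dim=-1)      # (B, N, K)
        # Per-channel neighbor messages.
        msgs = self.channel_proj(neighbors).view(B, N, self.K, D)    # (B, N, K, D)
        # Weight messages by channel assignment and aggregate over neighbors.
        agg = (gate.unsqueeze(-1) * msgs).sum(dim=1)                 # (B, K, D)
        return self.fuse(agg.reshape(B, self.K * D))                 # (B, D)
```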
The application of natural language processing (NLP) to cancer pathology reports has focused on detecting cancer cases, largely ignoring precancerous cases. Improving the characterization of precancerous adenomas assists in developing diagnostic tests for early cancer detection and prevention, especially for colorectal cancer (CRC). Here we developed transformer-based deep neural network NLP models to perform CRC phenotyping, with the goal of extracting precancerous lesion attributes and distinguishing cancer from precancerous cases. We achieved a macro-F1 score of 0.914 for classifying patients into negative, non-advanced adenoma, advanced adenoma and CRC. We further improved the performance to 0.923 using an ensemble of classifiers for cancer status classification and lesion size named entity recognition (NER). Our results demonstrate the potential of using NLP to leverage real-world health record data to facilitate the development of diagnostic tests for early cancer prevention.
Motion prediction is highly relevant to the perception of dynamic objects and static map elements in autonomous driving scenarios. In this work, we propose PIP, the first end-to-end Transformer-based framework that jointly and interactively performs online mapping, object detection and motion prediction. PIP leverages map queries, agent queries and mode queries to encode the instance-wise information of map elements, agents and motion intentions, respectively. Based on this unified query representation, a differentiable multi-task interaction scheme is proposed to exploit the correlation between perception and prediction. Even without a human-annotated HD map or agents' historical tracking trajectories as guidance, PIP realizes end-to-end multi-agent motion prediction and achieves better performance than tracking-based and HD-map-based methods. PIP provides comprehensive high-level information about the driving scene (a vectorized static map and dynamic objects with motion information) and contributes to downstream planning and control. Code and models will be released to facilitate further research.
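A minimal sketch of the query design is given below: agent queries attend to map queries, and each agent/mode pair is decoded into a future trajectory. Query counts, dimensions, and the decoding head are assumptions for illustration, not PIP's actual architecture.

```python
# Minimal sketch of the query design: agent queries attend to map queries
# (perception-prediction interaction), then each agent/mode pair is decoded
# into a future trajectory. Query counts, dimensions and the decoding head
# are illustrative assumptions, not PIP's actual architecture.
import torch
import torch.nn as nn


class QueryInteraction(nn.Module):
    def __init__(self, dim=256, num_agents=50, num_maps=100, num_modes=6, horizon=12):
        super().__init__()
        self.map_q = nn.Embedding(num_maps, dim)      # map-element queries
        self.agent_q = nn.Embedding(num_agents, dim)  # agent (object) queries
        self.mode_q = nn.Embedding(num_modes, dim)    # motion-intention queries
        self.agent_map_attn = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.traj_head = nn.Linear(2 * dim, horizon * 2)  # (x, y) per future step
        self.horizon = horizon

    def forward(self, bev_feat_tokens):
        # bev_feat_tokens: (B, L, dim) encoder tokens; in the full model the
        # map/agent queries would first decode map elements and detections
        # from these tokens -- omitted here for brevity.
        B = bev_feat_tokens.size(0)
        map_q = self.map_q.weight[None].expand(B, -1, -1)
        agent_q = self.agent_q.weight[None].expand(B, -1, -1)
        # Interaction: agents attend to the online map representation.
        agent_ctx, _ = self.agent_map_attn(agent_q, map_q, map_q)      # (B, A, dim)
        # Combine each agent with every motion-intention (mode) query.
        A, M = agent_ctx.size(1), self.mode_q.weight.size(0)
        pair = torch.cat([
            agent_ctx[:, :, None].expand(-1, -1, M, -1),
            self.mode_q.weight[None, None].expand(B, A, -1, -1),
        ], dim=-1)                                                     # (B, A, M, 2*dim)
        return self.traj_head(pair).reshape(B, A, M, self.horizon, 2)  # trajectories
```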